With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its correlation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and on improving ICL in future work.
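The core mechanism the survey describes, conditioning an LLM on a few demonstrations with no weight updates, can be illustrated with a minimal prompt-assembly sketch. The template and example demonstrations below are illustrative assumptions, not taken from the survey:

```python
# Minimal sketch of in-context learning (ICL) prompt assembly: the model is
# conditioned on k demonstrations plus a query; no parameters are updated.

def build_icl_prompt(demonstrations, query, template="Input: {x}\nLabel: {y}"):
    """Concatenate k (input, label) demonstrations and append the query."""
    shots = [template.format(x=x, y=y) for x, y in demonstrations]
    # Leave the query's label blank for the model to complete.
    shots.append(template.format(x=query, y="").rstrip())
    return "\n\n".join(shots)

demos = [("The movie was wonderful.", "positive"),
         ("A dull, lifeless film.", "negative")]
prompt = build_icl_prompt(demos, "An instant classic.")
```

The resulting string would be passed verbatim to an LLM, whose continuation after the final "Label:" serves as the prediction.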
Patients care about what their teeth will look like after orthodontic treatment. Orthodontists usually describe the expected tooth movement based on the original smile images, which is unconvincing. The growth of deep-learning generative models has changed this situation: they can visualize the outcome of orthodontic treatment and help patients foresee their future teeth and facial appearance. While previous studies mainly focus on 2D or 3D virtual treatment outcome (VTO) at a profile level, the problem of simulating treatment outcomes in a frontal facial image is poorly explored. In this paper, we build an efficient and accurate system for simulating virtual teeth-alignment effects in a frontal facial image. Our system takes as input a frontal face image of a patient with visibly malpositioned teeth and the patient's 3D scanned teeth model, and progressively generates visual results of the patient's teeth given the specific orthodontic planning steps from the doctor (i.e., the specified translations and rotations of individual teeth). We design a multi-modal encoder-decoder generative model to synthesize identity-preserving frontal facial images with aligned teeth. In addition, the original image's color information is used to optimize the orthodontic outcomes, making the results more natural. We conduct extensive qualitative and clinical experiments, as well as a pilot study, to validate our method.
Recently, segmentation-based methods have become quite popular in scene text detection; they mainly consist of two steps: text kernel segmentation and expansion. However, the segmentation process considers each pixel only independently, and the expansion process struggles to achieve a favorable accuracy-speed trade-off. In this paper, we propose a Context-aware and Boundary-guided Network (CBN) to tackle these problems. In CBN, a basic text detector is first used to predict initial segmentation results. Then, we propose a context-aware module to enhance text kernel feature representations, which considers both global and local contexts. Finally, we introduce a boundary-guided module to expand enhanced text kernels adaptively using only the pixels on the contours, which not only yields accurate text boundaries but also maintains high speed, especially on high-resolution output maps. In particular, with a lightweight backbone, the basic detector equipped with our proposed CBN achieves state-of-the-art results on several popular benchmarks, and the proposed CBN can be plugged into several segmentation-based methods. Code will be available at https://github.com/XiiZhao/cbn.pytorch.
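The boundary-guided idea, operating only on the pixels lying on a kernel's contour rather than the full mask, can be illustrated with a toy binary-mask sketch. The real CBN module is learned; this NumPy-only fragment merely shows how the contour pixel set of a kernel is extracted:

```python
import numpy as np

def contour_pixels(mask):
    """Return a boolean map of foreground pixels that touch the background
    via the 4-neighbourhood, i.e. the contour on which expansion operates."""
    padded = np.pad(mask, 1, constant_values=0)
    # A foreground pixel is interior iff all four neighbours are foreground.
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:]) & mask
    return mask & ~interior

kernel = np.zeros((5, 5), dtype=bool)
kernel[1:4, 1:4] = True          # a 3x3 text kernel
contour = contour_pixels(kernel)  # 8 ring pixels; the centre is interior
```

Restricting the expansion step to this contour set is what keeps the cost low on high-resolution output maps, since interior pixels never need to be revisited.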
Visual anomaly detection plays a crucial role not only in manufacturing inspection, which finds defects in products during manufacturing processes, but also in maintenance inspection, which keeps equipment in optimal working condition, particularly outdoors. Due to the scarcity of defective samples, unsupervised anomaly detection has attracted great attention in recent years. However, existing datasets for unsupervised anomaly detection are biased towards manufacturing inspection and do not consider maintenance inspection, which is usually conducted in uncontrolled outdoor environments with varying camera viewpoints, messy backgrounds, and degradation of object surfaces after long-term operation. We focus on outdoor maintenance inspection and contribute a comprehensive Maintenance Inspection Anomaly Detection (MIAD) dataset, which contains more than 100K high-resolution color images covering various outdoor industrial scenarios. The dataset is generated by 3D graphics software and covers both surface and logical anomalies with pixel-precise ground truth. We conduct extensive evaluations of representative unsupervised anomaly detection algorithms, and we expect MIAD and the corresponding experimental results to inspire the research community on outdoor unsupervised anomaly detection tasks. Worthwhile related future work can be spawned from our new dataset.
Benefiting from large-scale pretrained vision-language models (VL-PMs), the performance of visual question answering (VQA) has begun to approach human oracle performance. However, finetuning large-scale VL-PMs on limited VQA data usually faces overfitting and poor generalization, leading to a lack of robustness. In this paper, we aim to improve the robustness of VQA systems (i.e., the ability to defend against input variations and human-adversarial attacks when VL-PMs are finetuned on VQA) from the perspective of the information bottleneck. In general, the internal representations obtained by VL-PMs inevitably contain information that is irrelevant and redundant for the downstream VQA task, resulting in statistically spurious correlations and insensitivity to input variations. To encourage the representations to converge to a sufficient statistic in vision-language learning, we propose the Correlation Information Bottleneck (CIB) principle, which seeks a trade-off between compression and redundancy in representations by minimizing the mutual information (MI) between the inputs and the internal representations while maximizing the MI between the outputs and the representations. Meanwhile, CIB measures the internal correlations between visual and linguistic inputs and representations via a symmetrized joint MI estimation. Extensive experiments on five VQA input-robustness benchmarks and two VQA benchmarks demonstrate the effectiveness and superiority of the proposed CIB in improving the robustness of VQA systems.
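The bottleneck trade-off described above, minimizing I(X; Z) for compression while maximizing I(Z; Y) for prediction, is often trained through a variational surrogate: a cross-entropy term plus a KL regularizer on a Gaussian encoder. The sketch below shows that generic surrogate, not the paper's symmetrized joint MI estimator; shapes and the beta value are illustrative assumptions:

```python
import numpy as np

def vib_loss(mu, log_var, logits, labels, beta=1e-3):
    """Generic variational information-bottleneck surrogate:
    cross-entropy (prediction term, an upper bound on -I(Z; Y))
    + beta * KL(q(z|x) || N(0, I)) (compression term, a bound on I(X; Z))."""
    # KL divergence of a diagonal Gaussian from the standard normal, per sample.
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)
    # Numerically stable softmax cross-entropy.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(labels)), labels]
    return float(np.mean(ce + beta * kl))

# Toy call: a zero-information encoder (mu=0, log_var=0) has zero KL cost,
# so the loss reduces to plain cross-entropy.
loss = vib_loss(np.zeros((2, 8)), np.zeros((2, 8)),
                np.zeros((2, 3)), np.array([0, 1]))
```

The coefficient beta controls how aggressively task-irrelevant information is squeezed out of the representation.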
Monocular depth estimation (MDE) has attracted intense research due to its low cost and its critical role in robotic tasks such as localization, mapping, and obstacle detection. Driven by the development of deep learning, supervised methods have achieved great success, but they rely on large quantities of ground-truth depth annotations, which are expensive to collect. Unsupervised domain adaptation (UDA) transfers knowledge from labeled source data to unlabeled target data, relaxing the constraints of supervised learning. However, existing UDA methods may not fully bridge the domain gap across different datasets because of the domain shift problem. We argue that better domain alignment can be achieved via a well-designed feature decomposition. In this paper, we propose a novel UDA method for MDE, referred to as Learning Feature Decomposition for Adaptation (LFDA), which learns to decompose the feature space into content and style components. LFDA only attempts to align the content component, since it has a smaller domain gap. Meanwhile, it excludes the style component, which is specific to the source domain, from training the primary task. Furthermore, LFDA uses separate feature distribution estimations to further bridge the domain gap. Extensive experiments on three domain-adaptive MDE scenarios show that the proposed method achieves superior accuracy and lower computational cost compared with state-of-the-art approaches.
Real-world data typically follow a long-tailed distribution, where a few majority classes account for most of the data while most minority classes contain a limited number of samples. Classification models that minimize cross-entropy struggle to represent and classify the tail classes. Although the problem of learning unbiased classifiers has been well studied, methods for representing imbalanced data remain underexplored. In this paper, we focus on representation learning for imbalanced data. Recently, supervised contrastive learning (SCL) has shown promising performance on balanced data. However, through theoretical analysis, we find that on long-tailed data it fails to form a regular simplex, which is an ideal geometric configuration for representation learning. To rectify the optimization behavior of SCL and further improve the performance of long-tailed visual recognition, we propose a novel loss for balanced contrastive learning (BCL). Compared with SCL, BCL has two improvements: class-averaging, which balances the gradient contribution of negative classes, and class-complement, which allows all classes to appear in every mini-batch. The proposed BCL method satisfies the condition for forming a regular simplex and assists the optimization of cross-entropy. Equipped with BCL, the proposed two-branch framework obtains stronger feature representations and achieves competitive performance on long-tailed benchmark datasets such as CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, and iNaturalist2018. Our code is available at \href{https://github.com/flamiezhu/bcl}{this URL}.
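The class-averaging idea above can be sketched as a modification of a contrastive denominator: instead of summing exp-similarities over all negatives (which lets head classes dominate the gradient), similarities are first averaged within each class and then summed across classes. This is a simplified illustrative sketch, not the paper's exact BCL loss (which also includes class-complement via learnable class prototypes):

```python
import numpy as np

def class_averaged_contrastive_loss(z, labels, tau=0.1):
    """Simplified sketch of class-averaging: in the denominator,
    exp-similarities are averaged per class, so every class contributes
    one term regardless of how many samples it has in the batch."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = np.exp(z @ z.T / tau)
    n = len(labels)
    losses = []
    for i in range(n):
        others = np.arange(n) != i
        pos = (labels == labels[i]) & others
        if not pos.any():
            continue
        # Class-averaging: mean exp-similarity within each class, then sum.
        denom = sum(sim[i, (labels == c) & others].mean()
                    for c in np.unique(labels)
                    if ((labels == c) & others).any())
        losses.append(np.mean(-np.log(sim[i, pos] / denom)))
    return float(np.mean(losses))

np.random.seed(0)
z = np.random.randn(6, 4)
labels = np.array([0, 0, 0, 1, 1, 1])
loss = class_averaged_contrastive_loss(z, labels)
```

With this averaging, duplicating samples of one class changes that class's denominator term only through its mean, which is what equalizes gradient contributions across head and tail classes.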
Generalized zero-shot learning (GZSL) aims to recognize images of both seen and unseen classes by transferring semantic knowledge from seen to unseen classes. A promising solution is to exploit the advantages of generative models to hallucinate realistic unseen samples based on the knowledge learned from seen classes. However, owing to generation shifts, the samples synthesized by most existing methods may deviate from the real distribution of the unseen data. To address this issue, we propose a flow-based generative framework that consists of multiple conditional affine coupling layers for learning unseen data generation. Specifically, we identify and address three potential problems that trigger generation shifts, i.e., semantic inconsistency, variance collapse, and structural disorder. First, to reinforce the reflection of semantic information in generated samples, we explicitly embed semantic information into the transformation in each conditional affine coupling layer. Second, to recover the intrinsic variance of real unseen features, we introduce a boundary sample mining strategy with entropy maximization to discover harder visual variants of semantic prototypes and thereby adjust the decision boundary of the classifier. Third, a relative positioning strategy is proposed to revise the attribute embeddings, guiding them to fully preserve the inter-class geometric structure and further avoid structural disorder in the semantic space. Extensive experimental results on four GZSL benchmark datasets demonstrate that GSMFlow achieves state-of-the-art performance on GZSL.
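A conditional affine coupling layer of the kind this framework stacks keeps half of the features fixed and applies an invertible scale-and-shift, whose parameters depend on the fixed half and on the semantic condition, to the other half. The NumPy sketch below uses hypothetical single-layer linear scale/shift networks (the real model would use deeper networks and learned weights):

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 4, 3                      # half-feature dim and semantic condition dim
Ws = rng.normal(size=(D + C, D)) # hypothetical scale-network weights
Wt = rng.normal(size=(D + C, D)) # hypothetical shift-network weights

def coupling_forward(x1, x2, c):
    """y1 = x1;  y2 = x2 * exp(s(x1, c)) + t(x1, c).
    The semantic condition c is injected into both scale and shift,
    mirroring the semantic embedding described in the abstract."""
    h = np.concatenate([x1, c])
    s, t = np.tanh(h @ Ws), h @ Wt   # tanh keeps the scale well-behaved
    return x1, x2 * np.exp(s) + t

def coupling_inverse(y1, y2, c):
    """Exact inverse: recomputes s, t from the untouched half y1 = x1."""
    h = np.concatenate([y1, c])
    s, t = np.tanh(h @ Ws), h @ Wt
    return y1, (y2 - t) * np.exp(-s)
```

Because x1 passes through unchanged, the inverse can recompute the same scale and shift, which is what makes the flow exactly invertible with a cheap log-determinant (the sum of s).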
Video question answering (VideoQA), which aims to answer a given question correctly based on understanding multimodal video content, is challenging due to the rich content of videos. From the perspective of video understanding, a good VideoQA framework needs to understand video content at different semantic levels and flexibly integrate the diverse video content to distill question-related content. To this end, we propose a lightweight visual-linguistic reasoning framework named LiVLR. Specifically, LiVLR first utilizes graph-based visual and linguistic encoders to obtain multi-grained visual and linguistic representations. Subsequently, the obtained representations are integrated by the devised diversity-aware visual-linguistic reasoning module (DaVL). The DaVL considers the differences between different types of representations and can flexibly adjust their importance when generating the question-related joint representation, which is an effective and general approach to representation integration. The proposed LiVLR is lightweight and shows its performance advantage on two VideoQA benchmarks, MSRVTT-QA and KnowIT VQA. Extensive ablation studies demonstrate the effectiveness of LiVLR's key components.
The choice of token vocabulary affects the performance of machine translation. This paper aims to figure out what makes a good vocabulary and whether the optimal vocabulary can be found without trial training. To answer these questions, we first provide an alternative understanding of the role of the vocabulary from the perspective of information theory. Motivated by this, we formulate the quest for vocabularization, i.e., finding the best token dictionary with a proper size, as an optimal transport (OT) problem. We propose VOLT, a simple and efficient solution without trial training. Empirical results show that VOLT outperforms widely used vocabularies in diverse scenarios, including WMT-14 English-German translation and TED's 52 translation directions. For example, VOLT achieves an almost 70% vocabulary size reduction and a 0.5 BLEU gain on English-German translation. Moreover, compared with BPE-search, VOLT reduces the search time from 384 GPU hours to 30 GPU hours on English-German translation. Code is available at https://github.com/jingjing-nlp/volt.
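The information-theoretic view above scores a candidate vocabulary by how the entropy of the tokenized corpus trades off against vocabulary size. The fragment below sketches only that scoring half, corpus entropy normalized per character, under a given tokenization; it is a hedged illustration and does not implement VOLT's optimal-transport search itself:

```python
import math
from collections import Counter

def corpus_entropy_per_char(tokens):
    """Entropy of the token distribution, normalised by average token length
    in characters. A larger vocabulary tends to lower this quantity; VOLT's
    formulation asks how much entropy reduction each size increment buys."""
    freq = Counter(tokens)
    total = sum(freq.values())
    entropy = -sum((n / total) * math.log(n / total) for n in freq.values())
    avg_len = sum(len(tok) * n for tok, n in freq.items()) / total
    return entropy / avg_len
```

Comparing this score across candidate vocabulary sizes, without training a translation model for each, is what removes the need for trial training.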